397 research outputs found

    Nominal Lexemes in Derivational Reduplication of Mandarin

    Get PDF
     Reduplication is one of the creative word-formation processes in Mandarin. Speakers usually reduplicate to express deeper lexical meaning or to convey semantic meaning more vividly. Reduplication forms new words by repeating an entire free morpheme (total reduplication) or part of one (partial reduplication). Morphologically, the functions of Chinese reduplication can be placed categorically within the derivational domain of lexemes; in fact, whereas derivation typically forms new lexemes and can be category-changing, reduplication often conveys values typically found in the inflectional domain. Using M.D.S. Simatupang's context-free approach, this study clarifies the various meanings of Chinese reduplication; then, based on the word-class membership tests and lexical decomposition tests proposed by J.W.M. Verhaar, it analyzes derivational and inflectional reduplication in Mandarin. The study sheds new light on reduplicative processes. The results show that, in the nominal domain, reduplication yields plural nouns, and that, in the derivational domain, nominal lexemes under reduplication have a flexible distribution of lexical items.
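    As a minimal sketch of the total/partial distinction described above, the following Python snippet repeats either the whole base or only its first syllable. The example words are common textbook cases, not items from the study's own data.

```python
def reduplicate(base: str, mode: str = "total") -> str:
    """Reduplicate a Mandarin base: repeat the entire free morpheme
    (total) or only its first syllable (partial)."""
    if mode == "total":
        return base + base          # A -> AA, AB -> ABAB
    if mode == "partial":
        return base[0] + base       # AB -> AAB
    raise ValueError(f"unknown mode: {mode}")

# Common textbook examples (illustrative; not from the study's data):
print(reduplicate("看"))                    # 看看 'take a look' (total)
print(reduplicate("讨论"))                  # 讨论讨论 'discuss a bit' (total)
print(reduplicate("帮忙", mode="partial"))  # 帮帮忙 'lend a hand' (partial)
```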

    Nominal Lexemes Morphemes Reduplication of Mandarin

    Get PDF
    Morphologically, the functions of Chinese reduplication can be placed categorically within the derivational domain of lexemes; in fact, whereas derivation typically forms new lexemes and can be category-changing, reduplication often conveys values typically found in the inflectional domain. Using word-class tests and lexical decomposition tests, this research sheds new light on reduplicative processes. The results show that, in the nominal domain, reduplication yields plural nouns, and that, in the derivational domain, nominal lexemes under reduplication have a flexible distribution of lexical items. The types of noun reduplication in Mandarin follow the AA, AAB, and AABB patterns, as illustrated in the sketch below.
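    A minimal sketch of the three noun patterns named above, assuming simple string rules over syllables; the sample nouns (人, 毛雨, 山水) are standard textbook illustrations, not the paper's data.

```python
def aa(a: str) -> str:
    """AA pattern: full reduplication of a monosyllabic noun."""
    return a * 2

def aab(ab: str) -> str:
    """AAB pattern: reduplicate only the first syllable of AB."""
    return ab[0] + ab

def aabb(ab: str) -> str:
    """AABB pattern: reduplicate each syllable of AB."""
    return ab[0] * 2 + ab[1] * 2

# Standard textbook examples (illustrative; not the paper's own data):
print(aa("人"))      # 人人    'everyone'
print(aa("天"))      # 天天    'every day'
print(aab("毛雨"))   # 毛毛雨  'drizzle'
print(aabb("山水"))  # 山山水水 'landscapes'
```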

    Mandarin has subjectivity-based adjective ordering preferences in the presence of de

    Get PDF
    We investigate adjective ordering preferences in Mandarin, a language that has been claimed to have English-like preferences, but only in the absence of the linking particle de (Sproat & Shih 1991). Extending the experimental methodology of Scontras et al. (2017), we find evidence of robust adjective ordering preferences in Mandarin when de is present. Moreover, the Mandarin preferences are predicted by adjective subjectivity, as in English and other unrelated languages.
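    As a toy illustration of the subjectivity account (not the paper's experimental setup), one can sort adjectives by a subjectivity score and place more subjective ones farther from the noun. The words and scores below are hypothetical placeholders.

```python
# Hypothetical subjectivity scores (placeholders, not the paper's stimuli).
subjectivity = {
    "漂亮": 0.8,  # 'beautiful' (more subjective)
    "红":   0.3,  # 'red'       (less subjective)
}

def order_with_de(adjectives, noun):
    """Order prenominal adjectives so that more subjective ones appear
    farther from the noun, each marked with the linking particle de (的)."""
    ordered = sorted(adjectives, key=lambda a: -subjectivity[a])
    return " ".join(f"{a}的" for a in ordered) + f" {noun}"

print(order_with_de(["红", "漂亮"], "房子"))  # 漂亮的 红的 房子
```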

    Fairness-Aware Client Selection for Federated Learning

    Full text link
    Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients) to train machine learning models collaboratively without revealing private data. Since the FL server can only engage a limited number of clients in each training round, FL client selection has become an important research problem. Existing approaches generally focus on either enhancing FL model performance or enhancing the fair treatment of FL clients. The problem of balancing performance and fairness considerations when selecting FL clients remains open. To address this problem, we propose the Fairness-aware Federated Client Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically adjusts FL clients' selection probabilities by jointly considering their reputations, times of participation in FL tasks and contributions to the resulting model performance. By not using threshold-based reputation filtering, it provides FL clients with opportunities to redeem their reputations after a perceived poor performance, thereby further enhancing fair client treatment. Extensive experiments based on real-world multimedia datasets show that FairFedCS achieves 19.6% higher fairness and 0.73% higher test accuracy on average than the best-performing state-of-the-art approach.
    Comment: Accepted by ICME 202
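    The abstract describes the selection rule only at a high level, so the following is a simplified, hypothetical stand-in (a softmax over heuristic scores, not the paper's Lyapunov-based formulation) showing how reputation, participation count, and contribution can be combined into selection probabilities without a hard reputation threshold.

```python
import math
import random

def selection_probabilities(clients, temp=1.0):
    """Combine reputation, participation count, and contribution into a
    softmax score. Clients chosen less often get a boost, and no client is
    permanently filtered out by a low reputation (no hard threshold).
    `clients` maps id -> dict(reputation, participations, contribution).
    Simplified illustration, NOT the paper's Lyapunov-based rule."""
    scores = {
        cid: (c["reputation"] + c["contribution"]) / (1 + c["participations"])
        for cid, c in clients.items()
    }
    z = sum(math.exp(s / temp) for s in scores.values())
    return {cid: math.exp(s / temp) / z for cid, s in scores.items()}

clients = {
    "a": {"reputation": 0.9, "participations": 12, "contribution": 0.7},
    "b": {"reputation": 0.4, "participations": 2,  "contribution": 0.5},  # can redeem itself
    "c": {"reputation": 0.7, "participations": 5,  "contribution": 0.6},
}
probs = selection_probabilities(clients)
# Samples with replacement; a real selector would sample without replacement.
selected = random.choices(list(probs), weights=list(probs.values()), k=2)
print(probs, selected)
```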

    Towards Fairness-Aware Federated Learning

    Full text link
    Recent advances in Federated Learning (FL) have brought large-scale collaborative machine learning opportunities for massively distributed clients with performance and data privacy guarantees. However, most current works focus on the interest of the central controller in FL, and overlook the interests of the FL clients. This may result in unfair treatment of clients, which discourages them from actively participating in the learning process and damages the sustainability of the FL ecosystem. Therefore, the topic of ensuring fairness in FL is attracting a great deal of research interest. In recent years, diverse Fairness-Aware FL (FAFL) approaches have been proposed in an effort to achieve fairness in FL from different perspectives. However, there is no comprehensive survey which helps readers gain insight into this interdisciplinary field. This paper aims to provide such a survey. By examining the fundamental and simplifying assumptions, as well as the notions of fairness adopted by existing literature in this field, we propose a taxonomy of FAFL approaches covering major steps in FL, including client selection, optimization, contribution evaluation and incentive distribution. In addition, we discuss the main metrics for experimentally evaluating the performance of FAFL approaches, and suggest promising future research directions towards fairness-aware federated learning.
    Comment: 16 pages, 4 figure
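    One widely used fairness metric in this space is Jain's fairness index; the abstract does not state which metrics the survey covers, so this sketch is illustrative only.

```python
def jains_index(values):
    """Jain's fairness index: 1.0 when all clients are treated equally,
    approaching 1/n under maximal inequality."""
    n = len(values)
    s = sum(values)
    return s * s / (n * sum(v * v for v in values)) if s else 0.0

# e.g., per-client participation counts across FL rounds
print(jains_index([10, 10, 10, 10]))  # 1.0   (perfectly fair)
print(jains_index([37, 1, 1, 1]))     # ~0.29 (highly unequal)
```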

    Distinction Between Inflection and Derivation of Learning Reduplication in Mandarin

    Full text link
    Reduplication is a word-formation process in Mandarin, and one of the most difficult topics for scholars and students to comprehend. Theoretically, this research offers an approach different from those of previous researchers. Using M.D.S. Simatupang's context-free approach, this research contrasts the reduplicative forms of all word classes (AA, AABB, ABAB, ABB) with their base forms (A, AB) and shows the relationships between them; then, based on the word-class tests and lexical decomposition tests proposed by J.W.M. Verhaar, it analyzes and explains derivational and inflectional reduplication in Mandarin so that students can understand the meanings of these vocabulary items. In examining derivational and inflectional reduplication in Mandarin, this research also disseminates the use of morphological theory. In addition, this study discusses Mandarin reduplication across the various word classes that serve as bases for the relevant reduplicative forms. Preliminary results are presented here to stimulate more complete work; ideally, this research can be disseminated as learning and reading material for future study.
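    A schematic rendering of the word-class test mentioned above (a hypothetical helper, not Verhaar's actual procedure): reduplication that changes the base's word class counts as derivational, while category-preserving reduplication counts as inflectional.

```python
def classify_reduplication(base_class: str, redup_class: str) -> str:
    """Word-class test (schematic): derivation may change the category
    of the base, inflection never does."""
    return "derivational" if base_class != redup_class else "inflectional"

# Illustrative cases (word-class labels are simplified assumptions):
print(classify_reduplication("noun", "noun"))       # inflectional (plural-like AA)
print(classify_reduplication("verb", "adjective"))  # derivational (category change)
```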

    Settling the Sample Complexity of Model-Based Offline Reinforcement Learning

    Full text link
    This paper is concerned with offline reinforcement learning (RL), which learns using pre-collected data without further exploration. Effective offline RL would be able to accommodate distribution shift and limited data coverage. However, prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality, thus posing an impediment to efficient offline RL in sample-starved applications. We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs). Concretely, consider a finite-horizon (resp. $\gamma$-discounted infinite-horizon) MDP with $S$ states and horizon $H$ (resp. effective horizon $\frac{1}{1-\gamma}$), and suppose the distribution shift of the data is reflected by some single-policy clipped concentrability coefficient $C^{\star}_{\text{clipped}}$. We prove that model-based offline RL yields $\varepsilon$-accuracy with a sample complexity of
    $$
    \begin{cases}
    \dfrac{H^{4} S C^{\star}_{\text{clipped}}}{\varepsilon^{2}} & \text{(finite-horizon MDPs)} \\[1ex]
    \dfrac{S C^{\star}_{\text{clipped}}}{(1-\gamma)^{3} \varepsilon^{2}} & \text{(infinite-horizon MDPs)}
    \end{cases}
    $$
    up to log factors, which is minimax optimal for the entire $\varepsilon$-range. The proposed algorithms are "pessimistic" variants of value iteration with Bernstein-style penalties, and do not require sophisticated variance reduction. Our analysis framework is established upon delicate leave-one-out decoupling arguments in conjunction with careful self-bounding techniques tailored to MDPs.
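    To make the bound concrete, the following sketch plugs illustrative parameter values (arbitrary choices, omitting constants and logarithmic factors, as in the statement above) into the two expressions.

```python
# Evaluate the minimax sample-complexity bounds (up to constants/log factors)
# for illustrative parameter values; these numbers are arbitrary choices.
def finite_horizon_bound(H, S, C, eps):
    return H**4 * S * C / eps**2

def infinite_horizon_bound(S, C, gamma, eps):
    return S * C / ((1 - gamma)**3 * eps**2)

H, S, C, eps, gamma = 10, 100, 2.0, 0.1, 0.9
print(f"finite-horizon:   {finite_horizon_bound(H, S, C, eps):.3g}")        # 2e+08
print(f"infinite-horizon: {infinite_horizon_bound(S, C, gamma, eps):.3g}")  # 2e+07
```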